RelationalGroupedDataset (Spark 3.5.0 JavaDoc)

Pivots a column of the current DataFrame and performs the specified aggregation.

There are two versions of the pivot function: one that requires the caller to specify the list of distinct values to pivot on, and one that does not. The latter is more concise but less efficient, because Spark needs to first compute the list of distinct values internally.

// Compute the sum of earnings for each year by course, with each course as a separate column
df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings")

// Or without specifying column values (less efficient)
df.groupBy("year").pivot("course").sum("earnings")
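For reference, here is a minimal, self-contained sketch of both forms. The local SparkSession setup and the small course-earnings dataset are illustrative assumptions, not part of the API reference:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("PivotExample")
  .getOrCreate()
import spark.implicits._

// Hypothetical sample data: one row per (year, course) with earnings.
val df = Seq(
  (2012, "dotNET", 10000),
  (2012, "Java",   20000),
  (2013, "dotNET", 48000),
  (2013, "Java",   30000)
).toDF("year", "course", "earnings")

// Explicit value list: Spark can build the output schema without an extra pass over the data.
df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings").show()

// Without the value list, Spark first computes the distinct values of "course"
// to determine the pivot columns, then performs the same aggregation.
df.groupBy("year").pivot("course").sum("earnings").show()

// Both produce one row per year with columns "dotNET" and "Java"
// containing the summed earnings.

When no value list is supplied, the number of distinct values Spark will pivot on is capped by the spark.sql.pivotMaxValues configuration (10000 by default), so the explicit-list form is preferable when the distinct values are known in advance.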


